
    On determining the AND-OR hierarchy in workflow nets

    This paper presents a notion of reduction in which a WF net is transformed into a smaller net by iteratively contracting certain well-formed subnets into single nodes until no further such contractions are possible. This reduction can reveal the hierarchical structure of a WF net and, since it preserves semantic properties such as soundness, can help with analysing and understanding why a WF net is sound or not. The reduction can also be used to verify whether a WF net is an AND-OR net. This class of WF nets was introduced in earlier work and arguably describes nets that follow good hierarchical design principles. It is shown that the reduction is confluent up to isomorphism, which means that despite the inherent non-determinism arising from the choice of subnets to contract, the final result of the reduction is always the same up to the identity of the nodes. Based on this result, a polynomial-time algorithm is presented that computes this unique result of the reduction. Finally, it is shown how this algorithm can be used to verify whether a WF net is an AND-OR net.
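
    As a rough illustration of the iterative contraction described here, the sketch below repeatedly collapses purely sequential nodes in a workflow graph until no further reduction applies. The graph representation and the single reduction rule are simplifying assumptions made for illustration; the paper's well-formed subnets and AND-OR conditions are considerably richer.

```python
# A minimal sketch of iterative subnet contraction on a workflow graph.
# The net is a dict: node -> set of successor nodes. The single reduction
# rule shown here (collapsing a purely sequential node into its predecessor)
# is illustrative only; the well-formed subnets of the paper are richer.

def predecessors(net, node):
    return {n for n, succs in net.items() if node in succs}

def contract_sequences(net):
    """Repeatedly remove nodes with exactly one predecessor and one
    successor, rewiring the edge around them, until none remain."""
    net = {n: set(s) for n, s in net.items()}
    changed = True
    while changed:
        changed = False
        for node in list(net):
            preds, succs = predecessors(net, node), net[node]
            if len(preds) == 1 and len(succs) == 1:
                (p,), (s,) = tuple(preds), tuple(succs)
                if p != node and s != node:
                    net[p].discard(node)
                    net[p].add(s)
                    del net[node]
                    changed = True
                    break
    return net

if __name__ == "__main__":
    # i -> t1 -> p -> t2 -> o collapses to i -> o (up to node identity)
    wf = {"i": {"t1"}, "t1": {"p"}, "p": {"t2"}, "t2": {"o"}, "o": set()}
    print(contract_sequences(wf))   # {'i': {'o'}, 'o': set()}
```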

    Causality and the semantics of provenance

    Provenance, or information about the sources, derivation, custody or history of data, has recently been studied in a number of contexts, including databases, scientific workflows and the Semantic Web. Many provenance mechanisms have been developed, motivated by informal notions such as influence, dependence, explanation and causality. However, there has been little study of whether these mechanisms formally satisfy appropriate policies, or even of how to formalize relevant motivating concepts such as causality. We contend that mathematical models of these concepts are needed to justify and compare provenance techniques. In this paper we review a theory of causality based on structural models that has been developed in artificial intelligence, and describe work in progress on a causal semantics for provenance graphs.
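
    The sketch below illustrates the structural-model view of causality referred to here: endogenous variables are defined by equations over other variables, and a naive "but-for" test intervenes on one variable and checks whether the outcome changes. The example model and the simplistic counterfactual test are assumptions made for illustration, not the causal semantics developed in the paper.

```python
# A toy structural model in the spirit of Halpern-Pearl causality: each
# endogenous variable is a function of other variables, and a naive
# but-for test asks whether flipping one variable changes the outcome.

def evaluate(equations, exogenous, interventions=None):
    """Solve the structural equations given exogenous values, honouring
    any interventions that clamp a variable to a fixed value."""
    interventions = interventions or {}
    values = dict(exogenous)
    values.update(interventions)
    # The example model is acyclic, so a bounded number of passes suffices.
    for _ in range(len(equations)):
        for var, fn in equations.items():
            if var not in interventions:
                values[var] = fn(values)
    return values

# Example: a fire occurs if lightning strikes OR a match is dropped.
equations = {"fire": lambda v: v["lightning"] or v["match"]}
context = {"lightning": True, "match": False}

actual = evaluate(equations, context)["fire"]
counterfactual = evaluate(equations, context, {"lightning": False})["fire"]
print("fire actually occurs:", actual)            # True
print("fire without lightning:", counterfactual)  # False -> but-for cause
```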

    Cypher: An Evolving Query Language for Property Graphs

    The Cypher property graph query language is an evolving language, originally designed and implemented as part of the Neo4j graph database, and it is currently used by several commercial database products and researchers. We describe Cypher 9, which is the first version of the language governed by the openCypher Implementers Group. We first introduce the language by example, and describe its uses in industry. We then provide a formal semantic definition of the core read-query features of Cypher, including its variant of the property graph data model, and its "ASCII Art" graph pattern matching mechanism for expressing subgraphs of interest to an application. We compare the features of Cypher to other property graph query languages, and describe extensions, at an advanced stage of development, that will form part of Cypher 10 and turn the language into a compositional language supporting graph projections and multiple named graphs.
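
    The query below gives a flavour of Cypher's "ASCII Art" pattern matching, run here through the Neo4j Python driver; the connection URI, credentials, and the Person/FRIEND_OF data model are assumptions made for illustration.

```python
# Finding friends-of-friends with a Cypher pattern: node and relationship
# shapes are drawn directly in the query text. The data model and
# connection details below are illustrative assumptions.
from neo4j import GraphDatabase

QUERY = """
MATCH (a:Person)-[:FRIEND_OF]->(b:Person)-[:FRIEND_OF]->(c:Person)
WHERE a.name = $name AND a <> c
RETURN DISTINCT c.name AS friend_of_friend
"""

def friends_of_friends(uri, user, password, name):
    with GraphDatabase.driver(uri, auth=(user, password)) as driver:
        with driver.session() as session:
            result = session.run(QUERY, name=name)
            return [record["friend_of_friend"] for record in result]

if __name__ == "__main__":
    print(friends_of_friends("bolt://localhost:7687", "neo4j", "secret", "Alice"))
```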

    Adaptive Process Management in Highly Dynamic and Pervasive Scenarios

    Process Management Systems (PMSs) are increasingly used as supporting tools for cooperative processes in pervasive and highly dynamic situations, such as emergency management, pervasive healthcare, and domotics/home automation. In all such situations, however, designed processes can easily be invalidated, since the execution environment may change continuously due to frequent unforeseeable events. This paper illustrates the theoretical framework and the concrete implementation of SmartPM, a PMS featuring a set of sound and complete techniques to cope automatically with unplanned exceptions. SmartPM is based on a general framework that adopts the Situation Calculus and IndiGolog.
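
    The loop below sketches, in very broad strokes, the monitor-and-repair idea behind adaptive PMSs of this kind: after each task the expected state is compared with the sensed state, and a recovery step is spliced in when they diverge. The task, sensing, and recovery interfaces are purely illustrative assumptions and do not reflect SmartPM's Situation Calculus / IndiGolog machinery.

```python
# A schematic monitor-and-adapt loop: execute tasks in order; when the
# sensed state diverges from the expected one, insert a recovery task
# and retry the failed task. Purely illustrative.

def run_process(tasks, state, sense, recover):
    pending = list(tasks)
    while pending:
        task = pending.pop(0)
        expected = task(dict(state))   # state the task should produce
        actual = sense(expected)       # state the environment reports
        if actual == expected:
            state = expected
        else:                          # unplanned exception detected
            pending.insert(0, task)    # retry the task after recovery
            pending.insert(0, recover(expected, actual))
            state = actual
    return state

if __name__ == "__main__":
    readings = iter([False, True, True])   # the first reading loses the work

    def do_work(s):
        return dict(s, done=True)

    def sense(s):
        return dict(s, done=s["done"] and next(readings))

    def recover(expected, actual):
        return lambda s: dict(s, done=False)   # reset so do_work can rerun

    print(run_process([do_work], {"done": False}, sense, recover))  # {'done': True}
```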

    A Core Calculus for Provenance

    Provenance is an increasing concern due to the ongoing revolution in sharing and processing scientific data on the Web and in other computer systems. It is proposed that many computer systems will need to become provenance-aware in order to provide satisfactory accountability, reproducibility, and trust for scientific or other high-value data. To date, there is no consensus concerning appropriate formal models or security properties for provenance. In previous work, we introduced a formal framework for provenance security and proposed formal definitions of properties called disclosure and obfuscation. In this article, we study refined notions of positive and negative disclosure and obfuscation in a concrete setting, that of a general-purpose programming language. Previous models of provenance have focused on special-purpose languages such as workflows and database queries. We consider a higher-order, functional language with sums, products, and recursive types and functions, and equip it with a tracing semantics in which traces themselves can be replayed as computations. We present an annotation-propagation framework that supports many provenance views over traces, including standard forms of provenance studied previously. We investigate some relationships among provenance views and develop some partial solutions to the disclosure and obfuscation problems, including correct algorithms for disclosure and positive obfuscation based on trace slicing.
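
    A toy version of a tracing semantics, in which evaluation records a trace and the trace can itself be replayed to recompute the value, is sketched below. The tiny first-order expression language and the trace shape are assumptions made to keep the example short; the paper's calculus is higher-order, with sums, products, and recursion.

```python
# A toy tracing evaluator: evaluating an expression records a trace, and
# the trace can itself be replayed to recompute the result.

def eval_traced(expr, env):
    """Evaluate expr and return (value, trace)."""
    kind = expr[0]
    if kind == "const":
        return expr[1], ("const", expr[1])
    if kind == "var":
        return env[expr[1]], ("var", expr[1], env[expr[1]])
    if kind == "add":
        v1, t1 = eval_traced(expr[1], env)
        v2, t2 = eval_traced(expr[2], env)
        return v1 + v2, ("add", t1, t2)
    raise ValueError(f"unknown expression: {kind!r}")

def replay(trace):
    """Recompute the value recorded by a trace."""
    kind = trace[0]
    if kind == "const":
        return trace[1]
    if kind == "var":
        return trace[2]            # the value observed at trace time
    if kind == "add":
        return replay(trace[1]) + replay(trace[2])
    raise ValueError(f"unknown trace node: {kind!r}")

expr = ("add", ("var", "x"), ("const", 1))
value, trace = eval_traced(expr, {"x": 41})
assert value == replay(trace) == 42
```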

    Driving Innovation through Big Open Linked Data (BOLD): Exploring Antecedents using Interpretive Structural Modelling

    Innovation is vital to find new solutions to problems, increase quality, and improve profitability. Big open linked data (BOLD) is a fledgling and rapidly evolving field that creates new opportunities for innovation. However, none of the existing literature has yet considered the interrelationships between antecedents of innovation through BOLD. This research contributes to knowledge building through utilising interpretive structural modelling to organise nineteen factors linked to innovation using BOLD identified by experts in the field. The findings show that almost all the variables fall within the linkage cluster, thus having high driving and dependence powers, demonstrating the volatility of the process. It was also found that technical infrastructure, data quality, and external pressure form the fundamental foundations for innovation through BOLD. Deriving a framework to encourage and manage innovation through BOLD offers important theoretical and practical contributions.
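
    The bookkeeping behind interpretive structural modelling's driving and dependence powers can be sketched as follows: given a (transitively closed) binary reachability matrix, driving power is a row sum, dependence power is a column sum, and factors are placed into the usual MICMAC quadrants. The three factors and the matrix below are invented for illustration and are not the nineteen BOLD factors analysed in the paper.

```python
# Driving power, dependence power, and MICMAC-style clustering from a
# binary reachability matrix. Factors and matrix are illustrative only.

def classify(factors, reach):
    n = len(factors)
    mid = n / 2
    result = {}
    for i, name in enumerate(factors):
        driving = sum(reach[i])                          # factors it reaches
        dependence = sum(reach[j][i] for j in range(n))  # factors reaching it
        if driving > mid and dependence > mid:
            cluster = "linkage"
        elif driving > mid:
            cluster = "driver (independent)"
        elif dependence > mid:
            cluster = "dependent"
        else:
            cluster = "autonomous"
        result[name] = (driving, dependence, cluster)
    return result

factors = ["technical infrastructure", "data quality", "innovation"]
reach = [  # reach[i][j] = 1 if factor i leads to factor j (incl. itself)
    [1, 1, 1],
    [0, 1, 1],
    [0, 0, 1],
]
for name, (drv, dep, cluster) in classify(factors, reach).items():
    print(f"{name}: driving={drv}, dependence={dep}, cluster={cluster}")
```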

    XPath satisfiability in the presence of DTDs

    We study the satisfiability problem associated with XPath in the presence of DTDs. This is the problem of determining, given a query p in an XPath fragment and a DTD D, whether or not there exists an XML document T such that T conforms to D and the answer of p on T is nonempty. We consider a variety of XPath fragments widely used in practice, and investigate the impact of different XPath operators on the satisfiability analysis. We first study the problem for negation-free XPath fragments with and without upward axes, recursion and data-value joins, identifying which factors lead to tractability and which to NP-completeness. We then turn to fragments with negation but without data values, establishing lower and upper bounds in the absence and in the presence of upward modalities and recursion. We show that with negation the complexity ranges from PSPACE to EXPTIME. Moreover, when both data values and negation are in place, we find that the complexity ranges from NEXPTIME to undecidable. Furthermore, we give a finer analysis of the problem for particular classes of DTDs, exploring the impact of various DTD constructs, identifying tractable cases, as well as providing the complexity in the query size alone. Finally, we investigate the problem for XPath fragments with sibling axes, exploring the impact of horizontal modalities on the satisfiability analysis. © 2008 ACM
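
    For the simplest tractable case, the satisfiability check can be pictured as a walk over the DTD. The sketch below handles a child-axis-only path with no predicates against a DTD simplified to a map from an element to the set of children it may contain, assuming every allowed child is optional so that a finite conforming document always exists. The richer fragments studied in the paper (negation, data values, upward axes, recursion) require the more involved analyses described above.

```python
# Satisfiability of a tiny XPath fragment (child axis only, no predicates)
# against a simplified DTD: element name -> set of element names that may
# appear among its children. Under these simplifying assumptions the query
# /a/b/c is satisfiable iff the path exists in the DTD's child relation
# starting from the root element.

def satisfiable(dtd, root, path):
    """path is a list of element names, e.g. ['a', 'b', 'c'] for /a/b/c."""
    if not path or path[0] != root:
        return False
    current = root
    for step in path[1:]:
        if step not in dtd.get(current, set()):
            return False
        current = step
    return True

dtd = {"library": {"book"}, "book": {"title", "author"}, "title": set()}
print(satisfiable(dtd, "library", ["library", "book", "title"]))   # True
print(satisfiable(dtd, "library", ["library", "title"]))           # False
```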

    The Linked Data Benchmark Council (LDBC): Driving competition and collaboration in the graph data management space

    Graph data management is instrumental for several use cases such as recommendation, root cause analysis, financial fraud detection, and enterprise knowledge representation. Efficiently supporting these use cases imposes a number of unique requirements, including the need for a concise query language and graph-aware query optimization techniques. The goal of the Linked Data Benchmark Council (LDBC) is to design a set of standard benchmarks that capture representative categories of graph data management problems, making the performance of systems comparable and facilitating competition among vendors. LDBC also conducts research on graph schemas and graph query languages. This paper introduces the LDBC organization and its work over the last decade.